First of all I would like to say thanks to the X.org community. Their work is awesome, and the fact that I can make my setup work entirely on X.org components is something I never thought possible when XFree86 was still around. I personally think that seeing an Optimus laptop running with Intel and Nouveau is a tremendous achievement.
My laptop at work is a Dell Latitude E6430. It comes loaded with features and I really like it. Among those features is the fact that this is an Nvidia Optimus enabled laptop, sporting both an Intel video card and an Nvidia one:
$ lspci | grep -i vga
00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
01:00.0 VGA compatible controller: NVIDIA Corporation GF108GLM [NVS 5200M] (rev a1)
This one is a muxless laptop of the worst kind: video outputs are connected only to specific chips!
| Output | Card |
|---|---|
| LVDS (internal panel) | Intel |
| VGA (not usable along with the docking station one) | Intel |
| VGA (docking station) | Intel |
| DVI | Nvidia |
| DVI (docking station) | Nvidia |
| DisplayPort (docking station) | Nvidia |
| HDMI | Nvidia |
So to use an external HDMI connection at home you need to drive it through the Nvidia card, no matter whether Optimus is enabled or not. I regularly use it docked with the lid closed, with an external keyboard and mouse and two external monitors connected to the VGA and DVI outputs of the docking station. Basically, while I'm at the office it looks like a normal desktop computer; but sometimes I need to disconnect it to go to a meeting, and sometimes I use it at home to play games as well.
Guess what? Free drivers, proprietary drivers, UEFI, UEFI secure boot, multi monitor, outputs changing on the fly… all sorts of fun! I’m impressed by the fact that it all works together.
There are four modes in which I can operate the system:
- Optimus enabled, free drivers for both Intel and Nvidia cards
- Optimus enabled, free driver for Intel and proprietary driver for the Nvidia card
- Optimus disabled, free driver for the Nvidia card
- Optimus disabled, proprietary driver for the Nvidia card
Each one has its drawbacks, so let me explain each setup a bit. At the end of the post I've put together a table with the pros and cons of each solution.
My current setup is:
- Fedora 19 x86_64
- Kernel 3.11.1 (stock Fedora)
- Nouveau DDX 1.0.9 (stock Fedora)
- Intel DDX 2.21.12 (stock Fedora)
- Nvidia proprietary drivers 325.15 (from my repository)
- VDPAU library 0.7 (stock Fedora)
- Mesa libraries 9.2 (20130919 prerelease, stock Fedora)
UEFI / legacy BIOS
If secure boot is enabled, there's no way to use the proprietary Nvidia driver without fiddling with UEFI keys. The module is built separately from the kernel package, so there's no way for it to carry the same signature as the kernel.
When UEFI is enabled, the free drivers work fine and replace the efifb framebuffer driver with their own, thus giving proper modesetting at the correct resolution and a speedy, responsive terminal.
With the proprietary Nvidia driver, efifb is not replaced, so the console still runs on it and the Nvidia driver only handles the X part. Unfortunately, with this method the framebuffer console is slow as hell, the resolution is not optimal, and the EFI framebuffer is never exposed on external monitors. In my case, pressing CTRL+ALT+Fx jumps me to a console that is shown on the closed laptop lid while on the docking station, making it pretty useless.
So if you're going to use the proprietary driver and you often use the console, make sure you're booting in BIOS mode and not UEFI. What UEFI could bring you is Intel Rapid Start Technology, which has been included in kernel 3.11, so make your choice depending on what you need.
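If you're not sure how the machine booted, a quick check from a terminal is enough; the first command only relies on the kernel exposing /sys/firmware/efi on UEFI boots, and the second needs the mokutil package installed:

$ [ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted via legacy BIOS"
$ mokutil --sb-state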
Optimus disabled
When Optimus is disabled, I can freely use the proprietary Nvidia driver or the free Nouveau driver.
Both solutions work; unfortunately, performance- and feature-wise, Nouveau cannot compete with the proprietary Nvidia driver.
My main issue is power management: with the Nvidia driver the battery lasts a lot longer, and the performance gap is enormous. Nouveau performance is really poor with 3D games (especially commercial Steam ones; Doom 3 works fine) and there's absolutely no power management, at least on my laptop. By playing with performance levels I was only able to overheat the card!
Another thing that does not work with Nouveau is docking station removal. With the Nvidia proprietary driver I'm able to do the following:
- Disconnect from the docking station: output moves from the external VGA and DVI monitors to the internal LVDS display.
- Reconnect to the docking station: the internal LVDS display is shut off and output goes back to the VGA and DVI monitors as they were before, one next to the other. I can even close the lid and the computer doesn't go into standby.
With Nouveau, I'm able to disconnect from the docking station, but when reconnecting I need to reposition the monitors again; and after that, when closing the lid, I need to wake the computer up again because it goes into standby.
With the recent XRandR support in the proprietary drivers I don't even need to edit the X.org configuration file. Whether I use nvidia-settings or the Gnome Displays panel, the result is reflected in both tools and preserved across boots.
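For reference, the same arrangement can also be applied from a script with plain xrandr. A minimal sketch, using hypothetical output names (check xrandr -q for the real ones your driver reports):

$ xrandr --output VGA-0 --auto --primary
$ xrandr --output DVI-I-1 --auto --right-of VGA-0
$ xrandr --output LVDS-0 --off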
Optimus with proprietary Nvidia drivers
To configure Optimus with the proprietary drivers, proceed as follows. First of all, install the proprietary driver as normal. Then edit the /etc/grub2.cfg file and remove some parameters from the kernel command line. This is required because the Intel driver still needs to operate with its KMS driver. So, from this:
nouveau.modeset=0 rd.driver.blacklist=nouveau nomodeset gfxpayload=vga=normal
you should go to this:
nouveau.modeset=0 rd.driver.blacklist=nouveau
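Keep in mind that /etc/grub2.cfg gets regenerated on kernel updates, so rather than editing it directly you may prefer to change the defaults and rebuild it. A sketch of the usual Fedora way, assuming the extra options come from GRUB_CMDLINE_LINUX (on a UEFI install the file to regenerate is /etc/grub2-efi.cfg instead):

# vi /etc/default/grub     (drop "nomodeset" and "gfxpayload=vga=normal" from GRUB_CMDLINE_LINUX)
# grub2-mkconfig -o /etc/grub2.cfg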
After this, edit or recreate the /etc/X11/xorg.conf file with the following contents:
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "intel"
    Driver "intel"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection

Section "Device"
    Option "ConstrainCursor" "no"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    #Option "UseDisplayDevice" "none"
EndSection
Make sure to set the correct bus ID for the Nvidia card; for instructions, look in the Nvidia documentation. Contrary to what's written there, I had to use the intel DDX driver for the Intel card instead of the modesetting one: with modesetting I'm not able to get any output on the Intel card.
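For what it's worth, the value can be read straight off lspci; just remember that lspci prints hexadecimal numbers while xorg.conf expects decimal, so an address of, say, 0a:00.0 would become "PCI:10:0:0" (it makes no difference here):

$ lspci | grep -i nvidia
01:00.0 VGA compatible controller: NVIDIA Corporation GF108GLM [NVS 5200M] (rev a1)

Here 01:00.0 translates to the BusID "PCI:1:0:0" used above.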
Upon reboot, you will see KMS running for the Intel card (the Plymouth screen) and then the login manager appears on the Nvidia-attached outputs, while the Intel outputs shut off.
After logging in, you can also check that both drivers are running with the following commands:
$ lsmod | egrep "i915|nvidia"
nvidia 9365874 51
i915 651861 2
i2c_algo_bit 13257 1 i915
drm_kms_helper 50239 1 i915
drm 274480 5 i915,drm_kms_helper,nvidia
i2c_core 34242 7 drm,i915,i2c_i801,drm_kms_helper,i2c_algo_bit,nvidia,videodev
video 19104 1 i915
$ dmesg | egrep -i "i915|nvidia"
[ 4.589447] nvidia: module license 'NVIDIA' taints kernel.
[ 4.595759] nvidia: module verification failed: signature and/or required key missing - tainting kernel
[ 4.601728] nvidia 0000:01:00.0: enabling device (0004 -> 0007)
[ 4.613153] [drm] Initialized nvidia-drm 0.0.0 20130102 for 0000:01:00.0 on minor 0
[ 4.613159] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 325.15 Wed Jul 31 18:50:56 PDT 2013
[ 4.738199] i915 0000:00:02.0: setting latency timer to 64
[ 4.768878] i915 0000:00:02.0: irq 48 for MSI/MSI-X
[ 5.088964] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
[ 5.088966] i915 0000:00:02.0: registered panic notifier
[ 5.088982] i915: No ACPI video bus found
[ 5.420554] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 1
[ 5.966583] nvidia 0000:01:00.0: irq 50 for MSI/MSI-X
[ 198.017862] nvidia 0000:01:00.0: irq 50 for MSI/MSI-X
To light up the other displays some xrandr commands are required (to enable these at boot, add them to a script in /etc/X11/xinit/xinitrc.d, as sketched below):
$ xrandr --setprovideroutputsource Intel NVIDIA-0
$ xrandr --auto
$ xrandr --output VGA1 --left-of DP-1
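As a sketch of that boot-time version, a small file such as /etc/X11/xinit/xinitrc.d/99-prime-outputs.sh (the name is my own invention) containing the same commands should do, since Fedora's X startup scripts pick up *.sh files from that directory:

#!/bin/sh
# Hypothetical helper: attach the Intel outputs to the Nvidia-rendered screen at session start
xrandr --setprovideroutputsource Intel NVIDIA-0
xrandr --auto
xrandr --output VGA1 --left-of DP-1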
Your Intel monitor should now have an extended desktop managed by the Nvidia card. Move windows around, and launch some commands to see that wherever you go you’re using the Nvidia accelerated driver:
$ glxinfo | grep "OpenGL version string"
OpenGL version string: 4.3.0 NVIDIA 325.15
$ vdpauinfo | grep -i string
Information string: NVIDIA VDPAU Driver Shared Library 325.15 Wed Jul 31 18:14:57 PDT 2013
Everything seems to work, except output manipulation: xrandr, Gnome and the Nvidia settings panel each have a different view of the situation.
Xrandr view:
$ xrandr -q | grep conn
VGA-0 connected primary 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm
LVDS-0 connected (normal left inverted right x axis y axis)
DP-0 disconnected (normal left inverted right x axis y axis)
DP-1 connected 1680x1050+1680+0 (normal left inverted right x axis y axis) 474mm x 296mm
HDMI-0 disconnected (normal left inverted right x axis y axis)
DP-2 disconnected (normal left inverted right x axis y axis)
DP-3 disconnected (normal left inverted right x axis y axis)
This is what I see in the Nvidia settings panel and in the Gnome Displays panel for the monitors: in one case I don't see any monitor at all, in the other the internal LVDS display is shown as enabled while in reality it is not, with its button locked in the "On" position:
Primary monitor assignment does not work either. I usually have the Gnome panel on the left monitor. If I try to move it away from the Nvidia output I get this:
$ xrandr --output VGA1 --primary
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 139 (RANDR)
Minor opcode of failed request: 30 (RRSetOutputPrimary)
Serial number of failed request: 53
Current serial number in output stream: 55
Putting monitor problems aside, running in this mode does not really give any benefit compared to running it with Optimus disabled and the proprietary Nvidia driver installed. Both cards are running with power management, but the Nvidia card is never shut off, so it doesn’t use less power than when running standalone.
There's no way to turn off the card with vga_switcheroo, all 3D libraries come from the Nvidia driver, and your desktop is rendered by the Nvidia card.
Prime (Optimus) with free Nouveau drivers
Here comes the juicy part. With enough maturity on the Nouveau side this would be the perfect setup. Nothing is required to start with this implementation: just install Fedora and everything should already be set up. Booting shows the Plymouth logo on both outputs.
Log in to the system and check that both drivers are running:
$ lsmod | egrep "i915|nouveau"
nouveau 943445 1
i915 651861 4
mxm_wmi 12865 1 nouveau
ttm 79865 1 nouveau
i2c_algo_bit 13257 2 i915,nouveau
drm_kms_helper 50239 2 i915,nouveau
drm 274480 8 ttm,i915,drm_kms_helper,nouveau
i2c_core 34242 7 drm,i915,i2c_i801,drm_kms_helper,i2c_algo_bit,nouveau,videodev
wmi 18697 3 dell_wmi,mxm_wmi,nouveau
video 19104 2 i915,nouveau
$ dmesg | egrep "i915|nouveau"
[ 3.155259] i915 0000:00:02.0: setting latency timer to 64
[ 3.163318] nouveau 0000:01:00.0: enabling device (0004 -> 0007)
[ 3.185671] i915 0000:00:02.0: irq 45 for MSI/MSI-X
[ 3.517135] i915 0000:00:02.0: fb0: inteldrmfb frame buffer device
[ 3.517136] i915 0000:00:02.0: registered panic notifier
[ 3.517156] i915: No ACPI video bus found
[ 3.774151] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
[ 3.774654] nouveau [ DEVICE][0000:01:00.0] BOOT0 : 0x0c1e00a1
[ 3.774659] nouveau [ DEVICE][0000:01:00.0] Chipset: GF108 (NVC1)
[ 3.774663] nouveau [ DEVICE][0000:01:00.0] Family : NVC0
[ 3.778240] nouveau [ VBIOS][0000:01:00.0] checking PRAMIN for image...
[ 3.787999] nouveau [ VBIOS][0000:01:00.0] ... signature not found
[ 3.788002] nouveau [ VBIOS][0000:01:00.0] checking PROM for image...
[ 3.788086] nouveau [ VBIOS][0000:01:00.0] ... signature not found
[ 3.788087] nouveau [ VBIOS][0000:01:00.0] checking ACPI for image...
[ 4.624674] nouveau [ VBIOS][0000:01:00.0] ... appears to be valid
[ 4.624679] nouveau [ VBIOS][0000:01:00.0] using image from ACPI
[ 4.624845] nouveau [ VBIOS][0000:01:00.0] BIT signature found
[ 4.624850] nouveau [ VBIOS][0000:01:00.0] version 70.08.a8.00.8d
[ 4.625140] nouveau [ DEVINIT][0000:01:00.0] adaptor not initialised
[ 4.625144] nouveau [ VBIOS][0000:01:00.0] running init tables
[ 4.753512] nouveau [ PFB][0000:01:00.0] RAM type: GDDR5
[ 4.753514] nouveau [ PFB][0000:01:00.0] RAM size: 1024 MiB
[ 4.753515] nouveau [ PFB][0000:01:00.0] ZCOMP: 0 tags
[ 4.779859] nouveau [ PTHERM][0000:01:00.0] FAN control: none / external
[ 4.779863] nouveau [ PTHERM][0000:01:00.0] fan management: disabled
[ 4.779867] nouveau [ PTHERM][0000:01:00.0] internal sensor: yes
[ 4.783179] nouveau [ DRM] VRAM: 1024 MiB
[ 4.783180] nouveau [ DRM] GART: 1048576 MiB
[ 4.783184] nouveau [ DRM] TMDS table version 2.0
[ 4.783185] nouveau [ DRM] DCB version 4.0
[ 4.783199] nouveau [ DRM] DCB outp 00: 01000323 00010034
[ 4.783201] nouveau [ DRM] DCB outp 01: 020323a6 0f220010
[ 4.783202] nouveau [ DRM] DCB outp 02: 040433b6 0f220010
[ 4.783203] nouveau [ DRM] DCB outp 03: 08024382 00020010
[ 4.783204] nouveau [ DRM] DCB outp 04: 02032362 00020010
[ 4.783205] nouveau [ DRM] DCB outp 05: 04043372 00020010
[ 4.783206] nouveau [ DRM] DCB outp 06: 02011300 00000000
[ 4.783207] nouveau [ DRM] DCB conn 00: 00000041
[ 4.783209] nouveau [ DRM] DCB conn 01: 00000100
[ 4.783210] nouveau [ DRM] DCB conn 02: 00001246
[ 4.783211] nouveau [ DRM] DCB conn 03: 00002346
[ 4.783212] nouveau [ DRM] DCB conn 04: 00010461
[ 4.783213] nouveau [ DRM] DCB conn 05: 00000500
[ 4.783878] nouveau [ DRM] ACPI backlight interface available, not registering our own
[ 4.784072] nouveau [ DRM] 3 available performance level(s)
[ 4.784075] nouveau [ DRM] 0: core 50MHz shader 101MHz memory 135MHz voltage 830mV
[ 4.784076] nouveau [ DRM] 1: core 202MHz shader 405MHz memory 324MHz voltage 830mV
[ 4.784078] nouveau [ DRM] 3: core 672MHz shader 1344MHz memory 1569MHz voltage 980mV
[ 4.784079] nouveau [ DRM] c: core 202MHz shader 405MHz memory 324MHz
[ 4.789392] nouveau [ DRM] MM: using COPY0 for buffer copies
[ 4.925967] nouveau [ DRM] allocated 1680x1050 fb: 0x60000, bo ffff88021fc21400
[ 4.926065] nouveau 0000:01:00.0: fb1: nouveaufb frame buffer device
[ 4.926068] [drm] Initialized nouveau 1.1.1 20120801 for 0000:01:00.0 on minor 1
Poking around with xrandr gives you completely different output names than the Nvidia driver did:
$ xrandr -q | grep conn
LVDS1 connected (normal left inverted right x axis y axis)
VGA1 connected primary 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm
LVDS-2 disconnected (normal left inverted right x axis y axis)
DP-1 disconnected (normal left inverted right x axis y axis)
DP-2 connected 1680x1050+1680+0 (normal left inverted right x axis y axis) 474mm x 296mm
HDMI-1 disconnected (normal left inverted right x axis y axis)
VGA-2 disconnected (normal left inverted right x axis y axis)
But at least they’re consistent with the Gnome Displays panel:
For reasons I don't understand, the Nvidia card appears twice, as two different but identical providers:
$ xrandr --listproviders
Providers: number : 3
Provider 0: id: 0x96 cap: 0xb, Source Output, Sink Output, Sink Offload crtcs: 3 outputs: 2 associated providers: 2 name:Intel
Provider 1: id: 0x66 cap: 0x7, Source Output, Sink Output, Source Offload crtcs: 2 outputs: 5 associated providers: 2 name:nouveau
Provider 2: id: 0x66 cap: 0x7, Source Output, Sink Output, Source Offload crtcs: 2 outputs: 5 associated providers: 2 name:nouveau
In the tests I made, there's no apparent difference between using one or the other. Which card is used is driven by the DRI_PRIME environment variable: if it's set to 0, commands run on the Intel card; if it's set to 1, they run on the Nvidia card. For example:
$ DRI_PRIME=1 vdpauinfo | grep -i string
Information string: G3DVL VDPAU Driver Shared Library version 1.0
Or even better, to check OpenGL status:
$ glxinfo | grep -e 'OpenGL.*string.*'
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
OpenGL core profile version string: 3.1 (Core Profile) Mesa 9.2.0
OpenGL core profile shading language version string: 1.40
OpenGL version string: 3.0 Mesa 9.2.0
OpenGL shading language version string: 1.30
$ DRI_PRIME=1 glxinfo | grep -e 'OpenGL.*string.*'
OpenGL vendor string: nouveau
OpenGL renderer string: Gallium 0.4 on NVC1
OpenGL core profile version string: 3.1 (Core Profile) Mesa 9.2.0
OpenGL core profile shading language version string: 1.40
OpenGL version string: 3.0 Mesa 9.2.0
OpenGL shading language version string: 1.30
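The same variable works for any program, so a single heavy application can be offloaded to the Nvidia card while the rest of the desktop stays on the Intel one; glxgears (from the Mesa demos, if you have it installed) is just a stand-in here for whatever 3D program you want to run:

$ DRI_PRIME=1 glxgears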
Unfortunately the desktop is very slow when it's rendered by the Intel driver and displayed through the Nvidia card. I've tried changing the priority in vga_switcheroo prior to starting X, setting the DRI_PRIME=1 variable at boot, using xrandr to change the provider output source, and so on, to no avail: the desktop can run only on the first card, otherwise it doesn't work at all. Usually I get a black screen when GDM starts.
There's no power management either, so the Intel card runs normally but the Nvidia one is always on and stuck at an intermediate performance level.
When docking, I get cloned output on all external displays at a very low resolution. It's the same issue as with the Optimus-disabled Nouveau setup: the outputs need to be rearranged, and after closing the lid the computer needs to be woken up from standby.
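Until that works by itself, the rearrangement after redocking can at least be scripted. A minimal sketch using the output names from the xrandr listing above (VGA1 and DP-2 for the two external monitors, LVDS1 for the panel):

$ xrandr --output VGA1 --auto --primary
$ xrandr --output DP-2 --auto --right-of VGA1
$ xrandr --output LVDS1 --off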
Optimus cards power operation
Dual cards can be shut down on demand through vga_switcheroo. For example, log in to your system as root without X running and look at the card status with the following command:
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :Pwr:0000:01:00.0
This tells you that the Integrated Graphics Display (IGD) is powered up (Pwr) and that it is the primary display (+). To shut off the secondary video card, a single command is required:
# echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :Off:0000:01:00.0
This shuts down the Nvidia card. A look at the battery estimate will now tell you that you have roughly twice the runtime, because the Intel card draws very little power compared to the Nvidia one.
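To put a number on it, you can watch the battery drain rate before and after powering the card off. A quick check, assuming your battery shows up as BAT0 and exposes the power_now attribute (names vary between models, and some batteries expose current_now instead):

$ cat /sys/class/power_supply/BAT0/power_now

On most systems the value is the instantaneous drain in microwatts; it should drop noticeably once the discrete card is off.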
Turn the card on again, and switch the framebuffer console to it:
# echo ON > /sys/kernel/debug/vgaswitcheroo/switch
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :Pwr:0000:01:00.0
# echo DDIS > /sys/kernel/debug/vgaswitcheroo/switch
[ 879.436727] i915: switched off
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD: :Off:0000:00:02.0
1:DIS:+:Pwr:0000:01:00.0
This moves the framebuffer and your shell to the Nvidia-driven monitor and shuts down the Intel card. Sweet, isn't it?
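As far as I can tell from the vga_switcheroo interface, the counterpart command is DIGD: it queues a delayed switch back to the integrated card, which takes effect once nothing is using the discrete one anymore. Check with cat that the "+" is back on the IGD line before powering the Nvidia card off again:

# echo DIGD > /sys/kernel/debug/vgaswitcheroo/switch
# cat /sys/kernel/debug/vgaswitcheroo/switch
# echo OFF > /sys/kernel/debug/vgaswitcheroo/switch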
Power management for automatically powering the cards up and down in Optimus systems (runtime power management) will come in kernel 3.12; I've tested it using the Fedora Rawhide kernel repository and the situation improves a lot:
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :DynPwr:0000:01:00.0
As you can see the second card is dynamically powered. Try to undock the system and check the status again: the second output is no longer needed so the second card shuts off:
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :DynOff:0000:01:00.0
Now, with the laptop undocked, launch a command on the second card:
# DRI_PRIME=1 vdpauinfo | grep -i string
Information string: G3DVL VDPAU Driver Shared Library version 1.0
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :DynPwr:0000:01:00.0
You will notice a slight delay before the command output is returned, but the card is powered on again! This is awesome. Now, after 1 or 2 seconds look again at the card:
# cat /sys/kernel/debug/vgaswitcheroo/switch
0:IGD:+:Pwr:0000:00:02.0
1:DIS: :DynOff:0000:01:00.0
It's shut off! Dock the laptop again and the monitor comes up again.
Keep in mind that powering cards up and down is a totally different thing from power managing a running card (adjusting clocks and so on). This makes the Nvidia card shut down automatically; it does not regulate its power levels during usage.
Summary
A Prime-enabled laptop does not require any manual configuration. The fact that the Nvidia card can power itself down is great and doubles my battery life! On screen I have KMS consoles without huge fonts, and I can have UEFI secure boot enabled! This is really awesome.
Unfortunately, without proper Nouveau power management and performance, added to the fact that I need to reconfigure the monitors every time I move (sometimes the output also goes all black when docking), the experience is not that great. I don't know why, but when I'm undocked and using only the LVDS internal panel, the Intel performance is fantastic. Problems arise only when it's docked and Nouveau is in use as well.
My old laptop was working flawlessly with Nouveau. I didn’t play games on it, it was not Optimus based and the driver was generally working better.
| Optimus | Disabled | Disabled | Enabled | Enabled (Prime) |
|---|---|---|---|---|
| Driver | Nvidia | Nouveau | Intel/Nvidia | Intel/Nouveau |
| Configuration | Very easy. | Already set up. | Very complex. | Already set up. |
| Card power management | Perfect! | Poor performance, no power management. | Nvidia card always powered up, renders for all screens. | Dynamic video card switching works fine, Nouveau performance does not. |
| Optimus card power management | N.A. | N.A. | Nvidia card can't power down. | Perfect! |
| Docking / undocking | Perfect! | Manual intervention required. | Manual intervention required, unreliable. | Manual intervention required. |
| Performance | Perfect! | Pretty bad. | Very good, some tearing when moving windows. | Bad when using the Nvidia card for output, otherwise perfect! |
| BIOS console | VGA, no KMS. | Perfect (KMS)! | Perfect (KMS on Intel). | Perfect (KMS)! |
| UEFI console | Uses efifb, somewhat slow. | Perfect (KMS)! | Perfect (KMS on Intel). | Perfect (KMS)! |
| UEFI secure boot | Can't work. | Perfect! | Can't work. | Perfect! |
Summing up, my current choice is the Optimus-disabled setup with the Nvidia drivers. I can play games, dock, undock, power management works OK and I can drive all outputs easily. And if I need to go to a meeting I don't have to be extra cautious and shut down virtual machines first, just in case the system doesn't come back up. It's kinda retro style booting with a text console, and the battery does not last more than 3 hours, but I can bear it.
I'm impressed by the X.org improvements of the last few years and am really looking forward to new developments. Sometimes, just for fun, I switch back to the free drivers to check the status of things like the new dynamic power management in kernel 3.12.
Let's hope Nvidia's collaboration gets better and the new documentation does not simply stop at what has been announced.
Thanks for the very informative post. I have been thinking about buying an E6430 (with Optimus), but after reading your post I have decided to find the same model without Optimus.
Hey, excellent guide. It has helped me with my own Latitude laptop at work. Same model in fact!
So the issue I’m having is when I do the steps…
$ xrandr --setprovideroutputsource Intel NVIDIA-0
$ xrandr --auto
(I don’t do the left of portion, I use “below”)
If I move my mouse toward the laptop’s display, my two monitors scroll with it. I’m pretty sure it should not be doing this. Any suggestions?
I have a Dell E6430 too. After some hacking I have found a way to make Nouveau/Optimus work properly on openSUSE 13.1 with the 3.13 kernel (you have to put "xrandr --setprovideroutputsource nouveau Intel" in your .xinitrc file; this line makes nvidia appear twice in "xrandr --listproviders").
With 3.13 the nvidia card powers up and down automatically after plugging the HDMI cable in and out, almost out of the box.
The only problem is that there is no sound over HDMI: HDA Intel PCH does not have a digital output, and there is no nvidia HDA sound card.
Does it look the same on F19?
slaanesh,
thanks for the information. I wish you were using Linux Mint 16 🙂
I have an Asus N76VZ (Nvidia GT650, Intel 4000), Linux Mint 16 (Ubuntu 13.10), nvidia-prime, and HDMI works great. However the laptop display stays black; I've fiddled with xorg.conf etc., but all I managed was to get the Linux Mint logo on it…
My situation is similar to @mid's: I have everything running fine with intel (without HDMI) and bumblebee works with optirun etc. (Steam and so on). However I need HDMI to get my monitor to display 2560×1440 and I can't get it to work. I was hoping that nvidia-prime would solve the issue, but gnome display manager does recognize the laptop screen.
Anyway you could help us out? I’m using xorg.conf provided by nvidia-prime.
I use my laptop as a workstation so I don't need battery life, just multiple screens: one laptop display and one monitor via HDMI.
*gnome display manager does NOT* recognize laptop display when running via nvidia-prime and hdmi.
Hello,
I came across these two:
http://people.freedesktop.org/~cbrill/dri-log/?channel=nouveau&date=2014-09-24
“the outcome is, as long as you use some opengl thing you’ll have screen corruptions ”
https://bugs.freedesktop.org/show_bug.cgi?id=84298
However I can use this laptop with 4K over HDMI with the nvidia drivers without problems. To use the laptop standalone I simply blacklist nvidia.
I’m sorry but I’m not following you:
- The bug is about using nouveau, not the nvidia driver you are using. I expect all ports/configurations to work with the Nvidia driver. Btw, HDMI is limited to 60 Hz at 4K.
- What does "use the laptop standalone" and "blacklist nvidia" mean? In which case? I'm assuming you are using intel + nvidia; if you blacklist nvidia you can't have output on the Intel card either, as the Nvidia card is the one used for rendering.
- Regarding the IRC log: there is no synchronization between the images rendered by the Nvidia GPU and the output device. This means that the output device can start reading the next frame of video while it is still being updated, producing a graphical artifact known as "tearing". Tearing is currently expected due to limitations in the design of the X.Org X server. This is documented in the Nvidia drivers (http://us.download.nvidia.com/XFree86/Linux-x86/346.35/README/randr14.html) and applies to open drivers as well.
Sorry for not coming back here for a while.
Yes, the bug is for nouveau; I wanted to use it, but due to the 'artefacts' mentioned by mid it's really not usable, so I do use nvidia. What I observed with nouveau was not tearing but a huge lack of window redrawing.
HDMI refresh in my case is limited to 30 Hz at 4K (UHD) due to HDMI 1.4, but since I use 4K for static work it is perfectly fine.
As you might know, kernel 4.5 brings changes around DRM for nouveau. I have just tested the Rawhide kernel on Fedora 23 and must admit everything works as expected (including a 4K desktop on the discrete GPU).
slaanesh,
thank you for this insightful post. It's hard to get any concrete info on Optimus setups on unix systems.
I've been battling with Optimus (Asus N76VZ: Nvidia GT650, Intel 4000) for quite some time now on Mint 16.
I can get all three outputs recognised with the nouveau driver, but the HDMI one seems to have refresh issues, causing artefacts when scrolling a document, viewing videos or moving the mouse around.
I also tried to get it working with bumblebee (two separate X servers) but can't get the HDMI output to work for the life of me. No multi-window scenario from their wiki applied to me.
I hope we'll be able to use all three displays (in one X server) and keep the ability to shut down nvidia when not needed in the near future :/
Hi slaanesh!
What do you mean by "First of all, install the proprietary driver as normal"? I'm trying to apply your guide with my friend who has an Alienware m14x R2, but we're stuck.
Thanks in advance
Hi! You didn't say which setup from the page you're trying to replicate; from your question it sounds like you want to use the proprietary driver for an Optimus configuration… Anyway, you can install the Nvidia driver however you prefer: from the official Nvidia installers, from the RPMFusion packages, or from my repository, which you can find in the menu at the top.
My understanding (which could be wrong) is that with the release of the 3.12 kernel, the nvidia card will be able to shut itself down. Do you plan to revise this post with new information?
Hello, the post already contains information about the 3.12 kernel (look in the middle of the "Optimus cards power operation" paragraph); I will probably update it once kernel 3.13 nears release, as it will contain fan management and power reclocking for Nvidia cards.
Hi, you wrote:
> This one is a muxless laptop of the worst kind: video outputs are connected only to specific chips!
How can you tell which output is connected to which chip?
Xorg.0.log, the system messages and the xrandr command output will tell you exactly what is connected to what.
On top of this, while experimenting, just try to run X.org with one of the two video cards shut off and some ports connected (e.g. DVI + VGA). You will quickly understand which connector is driven by which card.
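Concretely, something along these lines gives a good first hint of which driver claims which output (the exact log lines vary between drivers and versions):

$ xrandr --listproviders
$ grep -iE "output|connected" /var/log/Xorg.0.log | grep -iE "intel|nouveau|nvidia"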
Using Fedora 19 with KDE, none of the above worked; I had to switch to runlevel 3, then log in and run startx.
Using the proprietary drivers and the xorg.conf above.
All tests suggest nvidia is active and running the desktop.
Performance is not better than intel.
I resisted buying any laptop with a discrete card for this very reason. I don't do games, and when I need the power of CUDA I use my Bulldozer desktop with dual SLI Nvidia cards in Ubuntu. Sadly, all i7 laptops today, including my ASUS K55VM, come with an nvidia discrete card.

Ubuntu 12.04.3 implemented a solution for using the nvidia 319 drivers with nvidia-prime. However, this means nvidia runs permanently on the system and the Intel integrated card is used as a sink. The problem with this is temperature: the 3.8 kernel used still relies on the older cpufreq and not Intel P-state, and consequently Intel's Thermal Daemon tool can't be used either. The CPU temps go up under load and don't come down, thanks to the GPU running and dumping its heat as well. Idle temperatures are also higher. Yes, HDMI works well and games work well, but then I don't do games; I use my Chrome browser with all the GPU features turned on. Battery life is also lower due to Nvidia. Ubuntu uses the latest nvidia-prime with the 319 drivers, and I can't use Bumblebee to switch the Nvidia card off as they are not compatible.

I had to switch to Manjaro and use the latest 3.11 kernel, which enables the Intel P-state driver, and I installed Intel Thermal Daemon from AUR. My system runs cooler now and the temps compare to Windows 8 at idle, and in fact beat Windows at peak thanks to the superior Intel Thermal Daemon tool. Hopefully kernel 3.12 will bring Windows-like Optimus so I won't need Bumblebee anymore. Manjaro sets up kernel 3.11 with the Nvidia 325 drivers and Bumblebee, so I can use Nvidia when needed with primusrun or optirun.
Trying this on a Dell E6420 (not sure of the _exact_ differences, but the port layout is similar).
The Fedora 19 live image behaves 'as expected', alas with poor performance on the 3rd screen (built-in, plus VGA and DVI on the dock).
Once installed, the three screens come up 'correctly' (but again, poor performance on the 3rd). After updating the system, installing the nvidia drivers and rebooting into the updated system, Gnome fails to start.
Will keep looking into it and update asap.
So, what’s happening is that the Nvidia module is not being built.
I figured as much by trying to compile the driver from the Nvidia-provided binaries.
Haven't found a resolution yet; will have to hammer at it tonight!
Well, why don't you just use a prepackaged Nvidia driver, like the one in RPMFusion or mine? This is already figured out and working.
Sorry, I wasn’t clear enough.
The first try was with your packaged driver, but gnome-shell was crashing right from the start.
Same result with the ones from RPMFusion.